

Search for: All records

Creators/Authors contains: "Hilburn, Kyle"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Abstract AI-based algorithms are emerging in many meteorological applications that produce imagery as output, including for global weather forecasting models. However, the imagery produced by AI algorithms, especially by convolutional neural networks (CNNs), is often described as too blurry to look realistic, partly because CNNs tend to represent uncertainty as blurriness. This blurriness can be undesirable since it might obscure important meteorological features. More complex AI models, such as Generative AI models, produce images that appear to be sharper. However, improved sharpness may come at the expense of a decline in other performance criteria, such as standard forecast verification metrics. To navigate any trade-off between sharpness and other performance metrics it is important to quantitatively assess those other metrics along with sharpness. While there is a rich set of forecast verification metrics available for meteorological images, none of them focus on sharpness. This paper seeks to fill this gap by 1) exploring a variety of sharpness metrics from other fields, 2) evaluating properties of these metrics, 3) proposing the new concept of Gaussian Blur Equivalence as a tool for their uniform interpretation, and 4) demonstrating their use for sample meteorological applications, including a CNN that emulates radar imagery from satellite imagery (GREMLIN) and an AI-based global weather forecasting model (GraphCast). 
    Free, publicly-accessible full text available June 9, 2026
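The Gaussian Blur Equivalence idea can be illustrated with a minimal sketch: pick a simple sharpness metric (here, mean gradient magnitude, one of many possible candidates) and report the Gaussian blur sigma that, applied to a sharp reference image, reproduces the sharpness of the test image. The metric choice and search grid below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using numpy only (reflect padding at edges)."""
    if sigma <= 0:
        return img.copy()
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def sharpness(img):
    """Illustrative sharpness metric: mean gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def blur_equivalence(test_img, ref_img, sigmas=np.linspace(0.0, 5.0, 51)):
    """Return the blur sigma that, applied to the sharp reference image,
    best matches the sharpness of the test image (grid search)."""
    target = sharpness(test_img)
    scores = [abs(sharpness(gaussian_blur(ref_img, s)) - target) for s in sigmas]
    return float(sigmas[int(np.argmin(scores))])
```

Any sharpness metric that decreases monotonically with blur could be plugged into `blur_equivalence` in place of the gradient-based one, which is what makes the blur-sigma scale a uniform way to compare otherwise incommensurable metrics.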
  2. Abstract Numerous artificial intelligence-based weather prediction (AIWP) models have emerged over the past 2 years, mostly in the private sector. There is an urgent need to evaluate these models from a meteorological perspective, but access to their output is limited. We detail two new resources to facilitate access to AIWP model output data in the hope of accelerating the investigation of AIWP models by the meteorological community. First, a 3-yr (and growing) reforecast archive beginning in October 2020, containing twice-daily 10-day forecasts for FourCastNet v2-small, Pangu-Weather, and GraphCast Operational, is now available via an Amazon Simple Storage Service (S3) bucket through NOAA’s Open Data Dissemination (NODD) program (https://noaa-oar-mlwp-data.s3.amazonaws.com/index.html). This reforecast archive was initialized with both NOAA’s Global Forecast System (GFS) and ECMWF’s Integrated Forecasting System (IFS) initial conditions in the hope that users can begin to perform feature-based verification of impactful meteorological phenomena. Second, real-time output for these three models is visualized on our web page (https://aiweather.cira.colostate.edu) along with output from the GFS and the IFS. This allows users to easily compare output from each AIWP model against traditional, physics-based models, with the goal of familiarizing users with the characteristics of AIWP models and helping them determine whether the output aligns with expectations, is physically consistent and reasonable, and/or is trustworthy. We view these two efforts as a first step toward evaluating whether these new AIWP tools have a place in forecast operations. 
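The reforecast archive lives in a standard public S3 bucket, so its contents can be listed with an anonymous ListObjectsV2 request, no AWS credentials required. A minimal stdlib-only sketch follows; the key layout inside the bucket is not described here, so any prefix you pass is an assumption, and the bucket's index page should be consulted for the actual file naming convention.

```python
import urllib.parse
import xml.etree.ElementTree as ET

BUCKET_URL = "https://noaa-oar-mlwp-data.s3.amazonaws.com"
S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"  # S3 listing XML namespace

def list_url(prefix="", max_keys=100):
    """Build an anonymous ListObjectsV2 request URL for the public bucket."""
    query = urllib.parse.urlencode(
        {"list-type": "2", "prefix": prefix, "max-keys": str(max_keys)}
    )
    return f"{BUCKET_URL}/?{query}"

def parse_listing(xml_text):
    """Extract object keys from an S3 ListBucketResult XML document."""
    root = ET.fromstring(xml_text)
    return [el.findtext(f"{S3_NS}Key") for el in root.iter(f"{S3_NS}Contents")]

# To fetch a real listing (requires network access):
#   import urllib.request
#   keys = parse_listing(urllib.request.urlopen(list_url()).read())
```

For bulk downloads, the same bucket can of course be accessed with the AWS CLI or boto3 using unsigned (anonymous) requests.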
  3. Accurate precipitation retrieval using satellite sensors remains challenging due to limitations in the spatio-temporal sampling of the sensors and uncertainties in the applied parametric retrieval algorithms. In this research, we propose a deep learning framework for precipitation retrieval using observations from the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM) on the GOES-R satellite series. In particular, two deep convolutional neural network (CNN) models are designed to detect and estimate precipitation using cloud-top brightness temperature from the ABI and lightning flash rate from the GLM. Precipitation estimates from the ground-based Multi-Radar/Multi-Sensor (MRMS) system are used as target labels in the training phase. The experimental results show that, in the testing phase, the proposed framework offers more accurate precipitation estimates than the current operational Rainfall Rate Quantitative Precipitation Estimate (RRQPE) product from GOES-R. 
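The detect-then-estimate cascade described above can be sketched with stand-in models: a detection step produces a rain/no-rain mask, an estimation step produces rates, and the final retrieval keeps rates only where rain was detected. The thresholds and linear relation below are purely illustrative placeholders for the two trained CNNs.

```python
import numpy as np

def detect_precip(brightness_temp_k, flash_rate):
    """Stand-in for the detection CNN: cold cloud tops or any lightning
    are flagged as raining (illustrative threshold only)."""
    return (brightness_temp_k < 235.0) | (flash_rate > 0)

def estimate_rate(brightness_temp_k, flash_rate):
    """Stand-in for the estimation CNN: colder tops and more lightning map
    to higher rain rates (illustrative linear relation only)."""
    return np.maximum(0.0, 0.2 * (235.0 - brightness_temp_k)) + 0.5 * flash_rate

def retrieve(brightness_temp_k, flash_rate):
    """Two-stage cascade: estimate rain rates only where rain is detected."""
    mask = detect_precip(brightness_temp_k, flash_rate)
    return np.where(mask, estimate_rate(brightness_temp_k, flash_rate), 0.0)
```

Separating detection from quantification lets each model specialize: the detector handles the heavily imbalanced rain/no-rain decision, while the estimator only has to fit rates over raining pixels.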
  4. Abstract Atmospheric processes involve both space and time. Thus, humans looking at atmospheric imagery can often spot important signals in an animated loop of an image sequence not apparent in an individual (static) image. Utilizing such signals with automated algorithms requires the ability to identify complex spatiotemporal patterns in image sequences. That is a very challenging task due to the endless possibilities of patterns in both space and time. Here, we review different concepts and techniques that are useful to extract spatiotemporal signals from meteorological image sequences to expand the effectiveness of AI algorithms for classification and prediction tasks. We first present two applications that motivate the need for these approaches in meteorology, namely the detection of convection from satellite imagery and solar forecasting. Then we provide an overview of concepts and techniques that are helpful for the interpretation of meteorological image sequences, such as (a) feature engineering methods using (i) meteorological knowledge, (ii) classic image processing, (iii) harmonic analysis, and (iv) topological data analysis; (b) ways to use convolutional neural networks for this purpose with emphasis on discussing different convolution filters (2D/3D/LSTM-convolution); and (c) a brief survey of several other concepts, including the concept of “attention” in neural networks and its utility for the interpretation of image sequences and strategies from self-supervised and transfer learning to reduce the need for large labeled datasets. We hope that presenting an overview of these tools—many of which are not new but underutilized in this context—will accelerate progress in this area. 
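The difference between frame-by-frame 2D filtering and true spatiotemporal (3D) filtering can be made concrete in a few lines of numpy: a 3D filter mixes neighboring frames, so even the simplest temporal kernel, a frame-to-frame difference, responds to motion that no single-frame 2D filter can see. Filters are applied as cross-correlation (the CNN convention); this is a generic illustration, not a specific model from the review.

```python
import numpy as np

def conv3d_valid(sequence, kernel):
    """Direct (valid) cross-correlation of a (T, H, W) image sequence with a
    (t, h, w) kernel: a 3D filter mixes space and time in one response."""
    T, H, W = sequence.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(sequence[i:i + t, j:j + h, k:k + w] * kernel)
    return out

def temporal_difference(sequence):
    """Simplest spatiotemporal filter: kernel (-1, +1) along time only,
    i.e. a frame-to-frame difference that highlights moving features."""
    kernel = np.array([-1.0, 1.0]).reshape(2, 1, 1)
    return conv3d_valid(sequence, kernel)
```

A 2D filter applied frame-by-frame is the special case t = 1: it can sharpen each image but, by itself, produces identical output for a static and a moving scene.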
  5. Satellite sensors have been widely used for precipitation retrieval, and a number of precipitation retrieval algorithms have been developed using observations from various satellite sensors. The current operational Rainfall Rate Quantitative Precipitation Estimate (RRQPE) product from the Geostationary Operational Environmental Satellite (GOES) offers full-disk rainfall rate estimates based on observations from the Advanced Baseline Imager (ABI) aboard the GOES-R series. However, accurate precipitation retrieval using satellite sensors is still challenging due to limitations in the spatio-temporal sampling of the sensors and/or the uncertainty associated with the applied parametric retrieval algorithms. In this article, we propose a deep learning framework for precipitation retrieval using combined observations from the ABI and the Geostationary Lightning Mapper (GLM) on the GOES-R series to improve the current operational RRQPE product. In particular, the proposed framework is composed of two deep convolutional neural networks (CNNs) designed for precipitation detection and quantification. The cloud-top brightness temperature from multiple ABI channels and the lightning flash rate from the GLM measurements are used as inputs. To train the designed CNNs, the precipitation product from the National Oceanic and Atmospheric Administration (NOAA) Multi-Radar/Multi-Sensor (MRMS) system is used as the target label to optimize the network parameters. The experimental results show that the precipitation retrieval performance of the proposed framework is superior to the currently operational GOES RRQPE product in the selected study domain, and the performance is dramatically enhanced after incorporating the lightning data into the deep learning model. Using the independent MRMS product as a reference, the deep learning model can reduce the retrieval uncertainty in the operational RRQPE product by at least 31% in terms of mean squared error and normalized mean absolute error, and the improvement is more significant in moderate-to-heavy rain regions. Therefore, the proposed deep learning framework can potentially serve as an alternative approach for GOES precipitation retrievals. 
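The two error measures quoted above can be sketched directly. Note that the normalization convention for the mean absolute error (dividing by the mean reference rain rate) is our assumption; the article may normalize differently.

```python
import numpy as np

def mse(retrieval, reference):
    """Mean squared error against the reference (e.g., MRMS) rain-rate field."""
    return float(np.mean((np.asarray(retrieval) - np.asarray(reference)) ** 2))

def nmae(retrieval, reference):
    """Normalized mean absolute error: MAE divided by the mean reference rain
    rate, so errors are expressed relative to typical rainfall amounts."""
    retrieval, reference = np.asarray(retrieval), np.asarray(reference)
    return float(np.mean(np.abs(retrieval - reference)) / np.mean(reference))

def percent_reduction(new_error, baseline_error):
    """Relative improvement of one retrieval over a baseline, in percent."""
    return 100.0 * (baseline_error - new_error) / baseline_error
```

A quoted "at least 31% reduction" then corresponds to `percent_reduction(e_new, e_base) >= 31` for both error measures.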
  6. Abstract The method of neural networks (aka deep learning) has opened up many new opportunities to utilize remotely sensed images in meteorology. Common applications include image classification, e.g., to determine whether an image contains a tropical cyclone, and image-to-image translation, e.g., to emulate radar imagery for satellites that only have passive channels. However, there are yet many open questions regarding the use of neural networks for working with meteorological images, such as best practices for evaluation, tuning, and interpretation. This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of receptive fields, underutilized meteorological performance measures, and methods for neural network interpretation, such as synthetic experiments and layer-wise relevance propagation. We also consider the process of neural network interpretation as a whole, recognizing it as an iterative meteorologist-driven discovery process that builds on experimental design and hypothesis generation and testing. Finally, while most work on neural network interpretation in meteorology has so far focused on networks for image classification tasks, we expand the focus to also include networks for image-to-image translation. 
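The receptive field mentioned above, i.e. how many input pixels influence one output pixel, follows a standard recurrence over the layer stack. Padding is ignored here, and kernels and strides are assumed square; this is the textbook calculation, not code from the article.

```python
def receptive_field(layers):
    """Theoretical receptive field of a stack of conv/pool layers.

    `layers` is a list of (kernel_size, stride) pairs, input to output.
    Returns (receptive field in input pixels, output jump = stride product).
    Recurrence: rf += (kernel - 1) * jump; jump *= stride.
    """
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf, jump
```

For example, two 3x3 convolutions with stride 1 see a 5x5 input patch, which is why an emulator's receptive field must be grown (by strides, pooling, or dilation) until it covers the meteorological features it needs.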
  7. Producing high-resolution near-real-time forecasts of fire behavior and smoke impact that are useful for fire and air quality management requires accurate initialization of the fire location. One common representation of fire progression is the fire arrival time, which defines the time at which the fire arrives at a given location. Estimating the fire arrival time is critical for initializing the fire location within coupled fire-atmosphere models. We present a new method that utilizes machine learning to estimate the fire arrival time from satellite data in the form of burning/not-burning/no-data rasters. The proposed method, based on a support vector machine (SVM), is tested on the 10 largest California wildfires of the 2020 fire season and evaluated against independent observed data from airborne infrared (IR) fire perimeters. The SVM results show good agreement with the airborne fire observations in terms of fire growth and the spatial representation of fire extent. An absolute percentage error of 12% in burned area, a mean percentage error of 5% in total burned area, an average False Alarm Ratio of 0.21, an average Probability of Detection of 0.86, and an average Sørensen coefficient of 0.82 suggest that this method can be used to monitor wildfires in near-real time and provide accurate fire arrival times for improving fire modeling, even in the absence of IR fire perimeters. 
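The categorical scores reported above come from a standard 2x2 contingency table on binary burned/not-burned rasters; a sketch follows, with Sørensen's coefficient computed as the Dice score, its usual binary form.

```python
import numpy as np

def categorical_scores(predicted, observed):
    """Contingency-table scores for binary burned/not-burned rasters.

    hits (a): predicted and observed; false alarms (b): predicted only;
    misses (c): observed only.
    POD = a / (a + c), FAR = b / (a + b), Sorensen = 2a / (2a + b + c).
    """
    predicted = np.asarray(predicted, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    hits = np.sum(predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    misses = np.sum(~predicted & observed)
    return {
        "POD": hits / (hits + misses),
        "FAR": false_alarms / (hits + false_alarms),
        "Sorensen": 2 * hits / (2 * hits + false_alarms + misses),
    }
```

Averaging these scores over the 10 fires, as done in the abstract, summarizes performance across events of very different sizes.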
  8. We present an interactive HPC framework for coupled fire and weather simulations. The system is suitable for urgent simulations and forecasts of wildfire propagation and smoke, and does not require expert knowledge to set up and run. The core of the system is a coupled weather, wildland fire, fuel moisture, and smoke model, running in an interactive workflow and data management system. The system automates job setup, data acquisition, preprocessing, and simulation on an HPC cluster. It provides animated visualization of the results on a dedicated mapping portal in the cloud, and as GIS files or Google Earth KML files. The system also serves as an extensible framework for further research, including data assimilation and applications of machine learning to initialize the simulations from satellite data.
     Index Terms: WRF-SFIRE, coupled atmosphere-fire model, MODIS, VIIRS, satellite data, fire arrival time, data assimilation, machine learning 